N.B. you will need to have the skimage library installed to run this. For this reason, I have separated out the cctbx and normal Python bits. Once (if?) cctbx sorts out its build system, we can do this all in one go.
%matplotlib inline
from __future__ import division
Liberally adapted from http://scikit-image.org/docs/dev/auto_examples/applications/plot_coins_segmentation.html#example-applications-plot-coins-segmentation-py
N.B. throwing out the top 0.1% of bright pixels is really important! (line 13)
import numpy as np
import matplotlib.pyplot as plt
from cPickle import load
from scipy import misc
from scipy import ndimage as ndi
import os
percentile_clipping = 99.9
data_source = 'test_data_numpy'
files = os.listdir(data_source)
images = []
hists = []
for f in files:
    im = load(open(os.path.join(data_source, f), 'rb'))
    t_val = np.percentile(im, percentile_clipping)
    print "{}: clipping at {}%ile: {}".format(f, percentile_clipping, t_val)
    im = im.clip(0, t_val)  # Throw out the top 0.1% of bright pixels
    hists.append(np.histogram(im, bins=np.arange(0, t_val, 5)))
    images.append(im)
n_per_row = 4
# 31 images: with 4 per row that's 8 rows, so the figure is twice as tall as wide: figsize (20, 40).
def get_fig_axes(n_imgs):
    fig, axes = plt.subplots((n_imgs + n_per_row - 1) // n_per_row, n_per_row, figsize=(20, 40))  # Ceiling division for the row count.
    return fig, axes
_, axes = get_fig_axes(len(images))
for n, im in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    ax.imshow(im, cmap=plt.cm.gray, interpolation='nearest')
    ax.axis('off')
fig, ax = plt.subplots(figsize=(8, 8))
for hist in hists:
    ax.plot(hist[1][:-1], hist[0], lw=2)
# Parameters for image processing:
thresh = 15
bg = 35
ax.vlines(thresh, 0, 0.4e7)
ax.vlines(bg, 0, 0.4e7)
Looks like all the images have a peak around 15 ADU, so I will try this as the threshold. Most pixels are above 35, so use this for 'high'.
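(As a side note: the thresholds above were read off the histogram plot by eye. A rough way to locate that low-intensity peak programmatically might look like the sketch below; `first_histogram_peak` is my own hypothetical helper, not part of this notebook or of auto_detect_beamstop.)

```python
import numpy as np

def first_histogram_peak(im, bin_width=5, max_val=None):
    """Return the centre of the lowest-intensity local maximum of the histogram."""
    if max_val is None:
        max_val = np.percentile(im, 99.9)  # same clipping level as above
    counts, edges = np.histogram(im, bins=np.arange(0, max_val, bin_width))
    # A bin is a peak if it beats both neighbours; scan from the dark end.
    for i in range(1, len(counts) - 1):
        if counts[i] > counts[i - 1] and counts[i] > counts[i + 1]:
            return 0.5 * (edges[i] + edges[i + 1])
    return edges[np.argmax(counts)]  # fall back to the global maximum
```

This is only a sanity check on the hand-picked `thresh = 15`; for noisy histograms a smoothed peak-finder would be more robust.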
A note on visualization:
I am mainly using contour plots. These put a yellow line wherever the mask has a boundary between 0 (not beamstop) and 1 (beamstop). Ideally, we want a single line that includes all of the beamstop and nothing else.
_, axes = get_fig_axes(len(images))
for n, im in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    ax.imshow(im, cmap=plt.cm.gray, interpolation='nearest')
    ax.contour(im < thresh, [0.5], linewidths=1.2, colors='y')
    ax.set_title('pixels < {}'.format(thresh))
    ax.axis('off')
So this is not quite including enough of the beamstop. There are also lots of 'unfilled' bits inside the beamstop.
## Simple solution: add image-filling algorithms, and convolve with some smoothing functions
Will try adding a blur and another threshold to the segmentation.
1. Gaussian blur with FWHM 10, then a threshold of 0.1, does not work well at all.
2. Binary fills before and after convolving with a 20-pixel top-hat is OK, but very slow. Note that this also 'finds' some bits around the edges, which is undesirable.
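For reference, 'Simple Solution 1' (Gaussian blur then re-threshold) is not shown in code below; a minimal sketch might look like this. The FWHM-to-sigma conversion (sigma = FWHM / 2.355) and the helper name are my assumptions:

```python
import numpy as np
from scipy import ndimage as ndi

def blur_threshold_mask(im, thresh=15, fwhm=10.0, level=0.1):
    """Sketch of 'Simple Solution 1': blur the low-threshold mask, re-threshold."""
    mask = (im < thresh).astype(float)          # 1.0 where the pixel looks like beamstop
    sigma = fwhm / 2.355                        # FWHM -> Gaussian sigma (assumed conversion)
    blurred = ndi.gaussian_filter(mask, sigma)  # smooth the binary mask
    return blurred > level                      # keep anything above the (low) cut
```

The low cut of 0.1 pulls the boundary well outside the blurred edge, so the mask grows by several pixels in every direction, which is presumably why this "does not work well at all".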
blur = 21
_, axes = get_fig_axes(len(images))
for n, img_data in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    t_img = np.array(img_data < thresh, dtype=int)
    t_img = ndi.binary_fill_holes(t_img)  # Fill small holes
    # Convolve with a blur x blur top-hat; anything nonzero counts as True in the
    # next fill, so this effectively dilates and smooths the mask (slowly).
    t_img = ndi.filters.convolve(t_img, np.ones((blur, blur)), origin=(blur // 2, blur // 2))
    t_img = ndi.binary_fill_holes(t_img)  # Fill big holes
    ax.imshow(img_data, cmap=plt.cm.gray, interpolation='nearest')
    ax.contour(t_img, [0.5], linewidths=1.2, colors='y')
    ax.axis('off')
This is a more sophisticated approach, using a combination of thresholds and edge detection from the skimage package. This is the approach implemented in the auto_detect_beamstop module.
The first step is to create an elevation map:
from skimage.filter import sobel
_, axes = get_fig_axes(len(images))
e_maps = []
for n, im in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    elevation_map = sobel(im)
    e_maps.append(elevation_map)
    ax.imshow(elevation_map, cmap=plt.cm.jet, interpolation='nearest')
    ax.set_title('elevation_map')
    ax.axis('off')
Next, we create 'markers': pixels below the low threshold are marked as beamstop, and pixels above the high threshold are marked as background.
_, axes = get_fig_axes(len(images))
markers_all = []
for n, im in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    markers = np.zeros_like(im)
    markers[im < thresh] = 1  # Definitely beamstop
    markers[im > bg] = 2  # Definitely background
    markers_all.append(markers)
    ax.imshow(markers, cmap=plt.cm.spectral, interpolation='nearest')
    ax.axis('off')
    ax.set_title('markers')
Finally, we use the watershed transform to fill regions of the elevation map starting from the markers determined above to create a segmentation. We fill small holes after applying the algorithm to smooth things.
from skimage import morphology
import scipy.ndimage as ndimage
_, axes = get_fig_axes(len(images))
segmentations = []
for n, (e_map, markers) in enumerate(zip(e_maps, markers_all)):
    ax = axes[n // n_per_row, n % n_per_row]
    segmentation = morphology.watershed(e_map, markers)
    segmentation = ndimage.binary_fill_holes(segmentation - 1)
    segmentations.append(segmentation)
    ax.imshow(segmentation, cmap=plt.cm.gray, interpolation='nearest')
    ax.axis('off')
    ax.set_title('segmentation')
We can now visualize the final product using a contour at 0.5 (between 1 and 0 for beamstop and background)...
from skimage.color import label2rgb
_, axes = get_fig_axes(len(images))
for n, (segmentation, im) in enumerate(zip(segmentations, images)):
    ax = axes[n // n_per_row, n % n_per_row]
    labels_stop, _ = ndimage.label(segmentation)
    image_label_overlay = label2rgb(labels_stop, image=im)
    ax.imshow(im, cmap=plt.cm.gray, interpolation='nearest')
    ax.contour(segmentation, [0.5], linewidths=1.2, colors='y')
    ax.axis('off')
See the other notebook (demo_detect_beamstop) for a demo of the relevant Python modules I wrote to implement this easily.
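The auto_detect_beamstop module itself is not shown here, but the watershed pipeline above condenses to something like the following sketch. Note the assumptions: I use the modern `skimage.filters` / `skimage.segmentation` import paths (this notebook used the older `skimage.filter` and `skimage.morphology` locations), integer markers, and I return `seg == 1` (True on the beamstop) rather than the `segmentation - 1` form used above; the function name is mine.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def detect_beamstop(im, thresh=15, bg=35):
    """Return a boolean mask that is True on the (suspected) beamstop."""
    elevation_map = sobel(im)               # edge-strength 'elevation' map
    markers = np.zeros(im.shape, dtype=int)
    markers[im < thresh] = 1                # definitely beamstop
    markers[im > bg] = 2                    # definitely background
    seg = watershed(elevation_map, markers)  # flood the elevation map from the markers
    return ndi.binary_fill_holes(seg == 1)  # beamstop region, small holes filled
```

Anything between the two thresholds is decided by the watershed flooding, which is what closes the 'unfilled' bits that the simple threshold left behind.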